Source: BASE

1. Validation of a large-scale task-based test: functional progression in dialogic speaking performance; Task-based language teaching and assessment: Contemporary reflections from across the world
2. The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations
3. Towards new avenues for the IELTS Speaking Test: insights from examiners’ voices
4. Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
5. The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test
6. Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests
7. Task parallelness: investigating the difficulty of two spoken narrative tasks
8. Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances
9. Investigating the use of language functions for validating speaking test specifications
10. Exploring the use of video-conferencing technology to deliver the IELTS Speaking Test: Phase 3 technical trial
11. The IELTS Speaking Test: what can we learn from examiner voices?
12. Academic speaking: does the construct exist, and if so, how do we test it?
15. Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study
16. Developing rubrics to assess the reading-into-writing skills: a case study

Abstract: The integrated assessment of language skills, particularly reading-into-writing, is experiencing a renaissance. The use of rating rubrics with verbal descriptors that describe the quality of L2 writing performance is well established in large-scale assessment. However, less attention has been directed towards the development of reading-into-writing rubrics. Identifying and evaluating the contribution of reading ability to the writing process and product, so that it can be reflected in a set of rating criteria, is not straightforward. This paper reports on a recent project to define the construct of reading-into-writing ability for designing a suite of integrated tasks at four proficiency levels, ranging from CEFR A2 to C1. The authors discuss how theoretical construct definition, together with empirical analyses of test-taker performance, underpinned the development of rating rubrics for the reading-into-writing tests. Methodologies used in the project included questionnaires, expert panel judgement, group interviews, automated textual analysis and analysis of rater reliability. Based on the results of three pilot studies, the effectiveness of the rating rubrics is discussed. The findings can inform decisions about how best to account for both the reading and writing dimensions of test-taker performance in the rubric descriptors.

Keywords: CEFR; integrated tasks; L2 writing; language assessment; Q110 Applied Linguistics; reading-into-writing; scoring; writing; writing assessment

URL: http://hdl.handle.net/10547/621934
DOI: https://doi.org/10.1016/j.asw.2015.07.004
17. Exploring performance across two delivery modes for the same L2 speaking test: face-to-face and video-conferencing delivery: a preliminary comparison of test-taker and examiner behaviour
18. Exploring performance across two delivery modes for the IELTS Speaking Test: face-to-face and video-conferencing delivery (Phase 2)
19. Accuracy across proficiency levels: A learner corpus approach. Jennifer Thewissen. Presses Universitaires de Louvain, Louvain-la-Neuve, Belgium (2015). 342 pp.
20. A comparative study of the variables used to measure syntactic complexity and accuracy in task-based research